7 research outputs found

    Automatic segmentation and reconstruction of traffic accident scenarios from mobile laser scanning data

    Virtual reconstruction of historic sites, planning of restorations and additions of new building parts, and forest inventory are a few examples of fields that benefit from the application of 3D surveying data. Compared with the original workflow of 2D photo-based documentation and manual distance measurements, the 3D information obtained from multi-camera and laser scanning systems brings a noticeable improvement in surveying times and in the amount of generated 3D information. The 3D data allows detailed post-processing and better visualization of all relevant spatial information. Yet extracting the required information from the raw scan data and generating usable visual output still requires time-consuming, complex user-driven processing with the commercially available 3D software tools. In this context, automatic object recognition from 3D point cloud and depth data has been discussed in many works. The developed tools and methods, however, usually focus only on a certain kind of object or on the detection of learned invariant surface shapes. Although the resulting methods are applicable to certain data segmentation tasks, they are not necessarily suitable for arbitrary tasks because of the varying requirements of the different fields of research. This thesis presents a more broadly applicable solution for automatic scene reconstruction from 3D point clouds, targeting street scenarios and specifically the task of traffic accident scene analysis and documentation. The data, obtained by sampling the scene with a mobile scanning system, is evaluated, segmented, and finally used to generate detailed 3D information of the scanned environment. To this end, the work adapts and validates various existing approaches to laser scan segmentation for application to accident-relevant scene information, including road surfaces and markings, vehicles, walls, trees, and other salient objects. The approaches are evaluated with regard to their suitability and limitations for the given tasks, as well as to the possibilities of combining them with other procedures. The obtained knowledge is used to develop new algorithms and procedures that allow a satisfactory segmentation and reconstruction of the scene, corresponding to the available sampling densities and precisions. Besides the segmentation of the point cloud data, the thesis presents different visualization and reconstruction methods to widen the range of possible applications of the developed system for data export and use in third-party software tools.
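
    As a rough, illustrative sketch of the kind of ground-segmentation step such a pipeline relies on (not the algorithm actually developed in the thesis), the Python snippet below separates a road-like plane from the remaining points with a plain RANSAC plane fit. The point cloud, distance threshold, and iteration count are made-up placeholders.

    import numpy as np

    def ransac_ground_plane(points, dist_thresh=0.05, n_iters=500, seed=None):
        """Fit a plane to an (N, 3) point cloud with RANSAC.

        Returns the plane coefficients (a, b, c, d) of ax + by + cz + d = 0
        and a boolean mask of the points lying within dist_thresh of it."""
        rng = np.random.default_rng(seed)
        best_plane, best_inliers = None, np.zeros(len(points), dtype=bool)
        for _ in range(n_iters):
            # Plane through three randomly chosen points.
            p = points[rng.choice(len(points), size=3, replace=False)]
            normal = np.cross(p[1] - p[0], p[2] - p[0])
            norm = np.linalg.norm(normal)
            if norm < 1e-12:          # degenerate (collinear) sample, try again
                continue
            normal /= norm
            d = -normal.dot(p[0])
            # Keep the candidate plane supported by the most nearby points.
            inliers = np.abs(points @ normal + d) < dist_thresh
            if inliers.sum() > best_inliers.sum():
                best_plane, best_inliers = np.append(normal, d), inliers
        return best_plane, best_inliers

    # Toy usage: a flat "road" plus some scattered "object" points above it.
    road = np.c_[np.random.uniform(0, 10, (1000, 2)), np.random.normal(0, 0.01, 1000)]
    objects = np.random.uniform([0, 0, 0.5], [10, 10, 2.0], (200, 3))
    cloud = np.vstack([road, objects])
    plane, ground_mask = ransac_ground_plane(cloud)
    print(f"road points: {ground_mask.sum()}, other points: {(~ground_mask).sum()}")

    In a real mobile-scan workflow the non-ground points would then be clustered and classified further (for example into vehicles, walls, and trees), which is beyond this sketch.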

    Functional evaluation of transplanted kidneys with diffusion-weighted and BOLD MR imaging: initial experience

    PURPOSE: To prospectively evaluate feasibility and reproducibility of diffusion-weighted (DW) and blood oxygenation level-dependent (BOLD) magnetic resonance (MR) imaging in patients with renal allografts, as compared with these features in healthy volunteers with native kidneys.
    MATERIALS AND METHODS: The local ethics committee approved the study protocol; patients provided written informed consent. Fifteen patients with a renal allograft and in stable condition (nine men, six women; age range, 20-67 years) and 15 age- and sex-matched healthy volunteers underwent DW and BOLD MR imaging. Seven patients with renal allografts were examined twice to assess reproducibility of results. DW MR imaging yielded a total apparent diffusion coefficient including diffusion and microperfusion (ADC(tot)), as well as an ADC reflecting predominantly pure diffusion (ADC(D)) and the perfusion fraction. R2* of BOLD MR imaging enabled the estimation of renal oxygenation. Statistical analysis was performed, and analysis of variance was used for repeated measurements. Coefficients of variation between and within subjects were calculated to assess reproducibility.
    RESULTS: In patients, ADC(tot), ADC(D), and perfusion fraction were similar in the cortex and medulla. In volunteers, values in the medulla were similar to those in the cortex and medulla of patients; however, values in the cortex were higher than those in the medulla (P < .05). Medullary R2* was higher than cortical R2* in patients (12.9 sec(-1) +/- 2.1 [standard deviation] vs 11.0 sec(-1) +/- 0.6, P < .007) and volunteers (15.3 sec(-1) +/- 1.1 vs 11.5 sec(-1) +/- 0.5, P < .0001). However, medullary R2* was lower in patients than in volunteers (P < .004). Increased medullary R2* was paralleled by decreased diffusion in patients with allografts. A low within-subject coefficient of variation in the cortex and medulla was obtained for ADC(tot), ADC(D), and R2* (<5.2%), while the within-subject coefficient of variation was higher for the perfusion fraction (medulla, 15.1%; cortex, 8.6%). Diffusion and perfusion indexes correlated significantly with serum creatinine concentrations.
    CONCLUSION: DW and BOLD MR imaging are feasible and reproducible in patients with renal allografts.
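
    The quantities named above follow standard single-voxel signal models: DW imaging assumes S(b) = S0 * exp(-b * ADC), and BOLD gradient-echo imaging assumes S(TE) = S0 * exp(-TE * R2*). The Python sketch below fits both with least squares on made-up measurements; the b-values, echo times, and signals are illustrative placeholders rather than the study's data, and the study's ADC(D) and perfusion fraction additionally require a two-compartment (diffusion vs. microperfusion) separation that is omitted here.

    import numpy as np
    from scipy.optimize import curve_fit

    def dw_model(b, s0, adc):       # mono-exponential DW decay: S(b) = S0 * exp(-b * ADC)
        return s0 * np.exp(-b * adc)

    def bold_model(te, s0, r2s):    # gradient-echo decay: S(TE) = S0 * exp(-TE * R2*)
        return s0 * np.exp(-te * r2s)

    # Made-up measurements for a single region of interest (illustration only).
    b_values = np.array([0, 50, 100, 300, 500, 750, 1000])          # s/mm^2
    dw_signal = 1000 * np.exp(-b_values * 2.0e-3) * (1 + 0.02 * np.random.randn(b_values.size))
    echo_times = np.array([5, 10, 15, 20, 30, 40]) * 1e-3           # s
    bold_signal = 800 * np.exp(-echo_times * 13.0) * (1 + 0.02 * np.random.randn(echo_times.size))

    (s0_dw, adc_tot), _ = curve_fit(dw_model, b_values, dw_signal, p0=(dw_signal[0], 1e-3))
    (s0_te, r2_star), _ = curve_fit(bold_model, echo_times, bold_signal, p0=(bold_signal[0], 10.0))

    print(f"ADC_tot ~ {adc_tot * 1e3:.2f} x 10^-3 mm^2/s")
    print(f"R2*     ~ {r2_star:.1f} s^-1")    # a higher R2* indicates lower tissue oxygenation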

    CT patterns of fungal pulmonary infections of the lung: Comparison of standard-dose and simulated low-dose CT

    To assess the effect of radiation dose reduction on the appearance and visual quantification of specific CT patterns of fungal infection in immunocompromised patients.

    Die Vollstreckung von Nichtgeldleistungstiteln durch indirekten Zwang im schweizerischen Zivilverfahrensrecht [The Enforcement of Non-Monetary Judgments by Indirect Compulsion in Swiss Civil Procedure Law] (Astreinte and Imprisonment for Debt in Switzerland)
